
    What is the Computational Value of Finite Range Tunneling?

    Quantum annealing (QA) has been proposed as a quantum-enhanced optimization heuristic exploiting tunneling. Here, we demonstrate how finite range tunneling can provide considerable computational advantage. For a crafted problem designed to have tall and narrow energy barriers separating local minima, the D-Wave 2X quantum annealer achieves significant runtime advantages relative to Simulated Annealing (SA). For instances with 945 variables, this results in a time to 99% success probability that is ∼10^8 times faster than SA running on a single processor core. We also compared physical QA with Quantum Monte Carlo (QMC), an algorithm that emulates quantum tunneling on classical processors. We observe a substantial constant overhead against physical QA: D-Wave 2X again runs up to ∼10^8 times faster than an optimized implementation of QMC on a single core. We note that there exist heuristic classical algorithms that can solve most instances of Chimera-structured problems in a timescale comparable to the D-Wave 2X. However, we believe that such solvers will become ineffective for the next generation of annealers currently being designed. To investigate whether finite range tunneling will also confer an advantage for problems of practical interest, we conduct numerical studies on binary optimization problems that cannot yet be represented on quantum hardware. For random instances of the number partitioning problem, we find numerically that QMC, as well as other algorithms designed to simulate QA, scale better than SA. We discuss the implications of these findings for the design of next-generation quantum annealers.
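    As a rough illustration of the time-to-99%-success-probability metric used in this kind of benchmark, the sketch below computes the standard repeat-until-success estimate TTS = t_run · ln(1 − 0.99)/ln(1 − p) from a single-run time and a single-run success probability. The function name and the example numbers are hypothetical and not taken from the paper.

```python
import math

def time_to_target(t_run: float, p_success: float, target: float = 0.99) -> float:
    """Expected total runtime needed to reach the target success probability,
    given the runtime and success probability of a single repetition:
        TTS = t_run * ln(1 - target) / ln(1 - p_success)
    """
    if not 0.0 < p_success < 1.0:
        raise ValueError("p_success must lie strictly between 0 and 1")
    return t_run * math.log(1.0 - target) / math.log(1.0 - p_success)

# Hypothetical numbers for illustration only (not from the paper):
# a 20-microsecond anneal that succeeds 0.1% of the time per repetition.
print(time_to_target(t_run=20e-6, p_success=1e-3))  # ~0.092 seconds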

    Efficient Population Transfer via Non-Ergodic Extended States in Quantum Spin Glass

    Quantum tunneling has been proposed as a physical mechanism for solving binary optimization problems on a quantum computer because it provides an alternative to simulated annealing by directly connecting deep local minima of the energy landscape separated by large Hamming distances. However, classical simulations using Quantum Monte Carlo (QMC) were found to efficiently simulate tunneling transitions away from local minima if the tunneling is effectively dominated by a single path. We analyze a new computational role of coherent multi-qubit tunneling that gives rise to bands of non-ergodic extended (NEE) quantum states, each formed by a superposition of a large number of deep local minima with similar energies. NEE states provide a coherent pathway for population transfer (PT) between computational states with similar energies; in this regime, PT cannot be efficiently simulated by QMC. PT can serve as a new quantum subroutine for quantum search, quantum parallel tempering, and reverse annealing optimization algorithms.

    We study PT resulting from quantum evolution under a transverse field of an n-spin system that encodes the energy function E(z) of an optimization problem over the set of bit configurations z. The transverse field is rapidly switched on at the beginning of the algorithm, kept constant for a sufficiently long time, and switched off at the end. Given an energy function of a binary optimization problem and an initial bit-string with atypically low energy, the PT protocol searches for other bit-strings at energies within a narrow window around the initial one. We provide an analytical solution for PT in a simple yet nontrivial model: M randomly chosen marked bit-strings are assigned energies E(z) within a narrow strip [-n - W/2, -n + W/2], while the rest of the states are assigned energy 0. PT starts at a marked state and ends up in a superposition of L marked states inside a narrow energy window whose width is smaller than W. The best known classical algorithm for finding another marked state is exhaustive search. We find that the scaling of a typical PT runtime with n and L is the same as that in the multi-target Grover's quantum search algorithm, except for a factor equal to exp(n/(2B^2)) for finite transverse field B >> 1. Unlike the Hamiltonians used in analog quantum unstructured search algorithms known so far, the model we consider is non-integrable, and the transverse field delocalizes the marked states. As a result, our PT protocol is not exponentially sensitive in n to the weight of the driver Hamiltonian and may be initialized with a computational basis state.

    We develop the microscopic theory of PT by constructing a down-folded dense Hamiltonian acting in the space of marked states of dimension M. It belongs to the class of preferred-basis Lévy matrices (PBLM) with a heavy-tailed distribution of the off-diagonal matrix elements. Under certain conditions, the band of the marked states splits into minibands of non-ergodic delocalized states. We obtain an explicit form of the heavy-tailed distribution of PT times by solving cavity equations for the ensemble of down-folded Hamiltonians. We study numerically the PT subroutine as part of a quantum parallel tempering algorithm for a number of examples of binary optimization problems on fully connected graphs.
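    The marked-state model described above is concrete enough to sketch numerically. The toy snippet below builds the diagonal energy term plus a transverse-field driver for a small n, starts in one marked computational basis state, and measures how much population ends up on the other marked states after a fixed evolution time. The parameter values (n, M, W, B, T) are assumptions chosen only for illustration; this is a sketch of the model, not the paper's analysis or regime.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)

# Toy parameters (illustrative only): n spins, M marked states,
# strip width W, transverse field B, evolution time T.
n, M, W, B, T = 8, 6, 0.5, 2.0, 50.0
dim = 2 ** n

# Diagonal term: M randomly chosen marked bit-strings get energies in a
# narrow strip of width W centred at -n; every other state has energy 0.
marked = rng.choice(dim, size=M, replace=False)
energies = np.zeros(dim)
energies[marked] = -n + W * (rng.random(M) - 0.5)

# Transverse-field driver: sum over single-qubit Pauli-X operators.
X = np.array([[0.0, 1.0], [1.0, 0.0]])
driver = np.zeros((dim, dim))
for i in range(n):
    op = np.array([[1.0]])
    for j in range(n):
        op = np.kron(op, X if i == j else np.eye(2))
    driver += op

H = np.diag(energies) - B * driver

# Start in one marked computational basis state and evolve under H.
psi0 = np.zeros(dim, dtype=complex)
psi0[marked[0]] = 1.0
psi_T = expm(-1j * H * T) @ psi0

# Population transferred to the other marked states after time T.
print(np.sum(np.abs(psi_T[marked[1:]]) ** 2))
```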

    Binary classification with adiabatic quantum optimization

    We study the problem of supervised binary classification from the perspective of deploying adiabatic quantum optimization in training. A vast body of prior academic work, consisting of both theoretical and numerical studies, has indicated that quantum technology promises to provide computational power that may be fundamentally superior to any classical computing method. Given the abundance of NP-hard optimization problems that naturally arise in learning, it is clear that machine learning can benefit immensely from such an optimization tool. We describe a series of increasingly complex designs that result in computationally hard training problems of a combinatorial nature. In return for accepting classical computational hardness, we retain theoretical properties such as maximal sparsity and robustness to label noise, which are otherwise sacrificed by convex methods for the sake of computational efficiency and a sound theoretical footing. In order to be compatible with emerging quantum hardware technology, we formalize the training problem as quadratic unconstrained binary optimization.

    Our initial investigations focus on a simple training formulation with non-convex regularization that conforms to the architecture of existing quantum hardware and makes frugal use of the limited number of available physical qubits. Next, we extend this baseline formulation to a scalable algorithm, QBoost, which is able to incrementally train large-scale classifiers on data sets of practical interest. Further, we derive another algorithm, TotalQBoost, a theoretically motivated totally corrective boosting algorithm with cardinality penalization that also makes use of quantum optimization. Both QBoost and TotalQBoost perform explicit cardinality regularization, which is the only known way of achieving maximal sparsity in the trained classifiers. We apply QBoost and TotalQBoost to three different real-world computer vision problems and make use of a quantum processor for solving the sequence of discrete optimization problems generated by one of them. Finally, we study a learning formulation with convex regularization and a non-convex loss function, q-loss, specifically designed for robust supervised learning in the presence of label noise as it occurs in practice. For compatibility with quantum hardware, we derive the corresponding quadratic binary problem via a variational approximation. For all proposed algorithms, we compare results on a variety of popular synthetic and natural data sets against a rich selection of existing rival learning formulations.
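    As a hedged illustration of how this kind of training problem can be cast as quadratic unconstrained binary optimization, the sketch below builds a QBoost-style QUBO in which binary weights select weak classifiers, combining a squared-error data term with a cardinality penalty lam * sum_i w_i. The 1/N normalization, the helper name qboost_qubo, and the toy data are assumptions made for illustration and may differ from the exact formulations used in this work.

```python
import numpy as np

def qboost_qubo(H: np.ndarray, y: np.ndarray, lam: float) -> np.ndarray:
    """Build a QUBO matrix over binary weights w that select weak classifiers.

    H[s, i] is the +/-1 prediction of weak classifier i on sample s and
    y[s] is the +/-1 label.  The objective
        sum_s (1/N * sum_i w_i H[s, i] - y[s])**2 + lam * sum_i w_i
    expands, using w_i**2 = w_i for binary w and dropping the constant
    sum_s y[s]**2, into w^T Q w with the linear terms folded into the diagonal.
    """
    S, N = H.shape
    Q = (H.T @ H) / N ** 2                 # quadratic (and diagonal) couplings
    linear = -2.0 * (H.T @ y) / N + lam    # per-classifier linear terms
    Q[np.diag_indices(N)] += linear        # fold linear terms into the diagonal
    return Q

# Tiny illustration with random +/-1 weak-classifier outputs (made-up data).
rng = np.random.default_rng(1)
H = rng.choice([-1.0, 1.0], size=(20, 5))
y = rng.choice([-1.0, 1.0], size=20)
Q = qboost_qubo(H, y, lam=0.1)

# Brute force over all 2**5 binary weight vectors (feasible at this toy size);
# a quantum annealer or classical QUBO solver would take over for larger N.
candidates = [np.array([(b >> i) & 1 for i in range(5)], dtype=float)
              for b in range(2 ** 5)]
best = min(candidates, key=lambda w: w @ Q @ w)
print("selected weak classifiers:", best.astype(int))
```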